Jimmy adding iteration limit convergence criterion #1862
base: devel
Conversation
Job Mingw Test on 1fff889: invalidated by @Jimmy-INL
I know this is not the perfect setting, and I am sure Paul might have better suggestions, but at least now the number of runs is predictable.
It is important to differentiate this from the sampler's <limit> node, as the latter counts every model evaluation;
hence, with n variables there will be 2n gradient evaluations per iteration if using central difference, and the number of iterations will be <limit>/(2n). If this is less than <iterationLimit>, the GA will honor this lower limit.
I.e., the termination (given all other convergence criteria are not met) will occur at min{<iterationLimit>, ceil(<limit>/gradient evaluations per variable/\# of variables) + 1}.
For instance: if the \# of variables is 2, and we perform 2 evaluations per variable for gradients, and if <limit> = 20 and <iterationLimit> is 10, the termination will occur at min{ceiling(20/(2*2)) + 1, 10} = 6."""
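To make the arithmetic concrete, here is a small sketch (plain Python, not RAVEN code; the function name and arguments are purely illustrative) of the termination rule described above:

import math

def terminationIteration(nVars, evalsPerVar, limit, iterationLimit):
  # Illustrative only: the iteration at which the optimizer stops, following
  # min{<iterationLimit>, ceil(<limit> / (evaluations per variable * number of variables)) + 1}
  itersFromLimit = math.ceil(limit / (evalsPerVar * nVars)) + 1
  return min(iterationLimit, itersFromLimit)

# Worked example from the text: 2 variables, 2 gradient evaluations per variable,
# <limit> = 20, <iterationLimit> = 10  ->  min{ceil(20/4) + 1, 10} = 6
print(terminationIteration(2, 2, 20, 10))  # prints 6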
Some light editing, I suggest:
provides the maximum number of iterations at which the optimizer will terminate regardless of the status of any
convergence criteria. Note that for Gradient Descent each new optimal point with its nearby gradient evaluation
points all count as a single iteration. It is important to differentiate \xmlNode{iterationLimit} from the sampler's
\xmlNode{limit} node as the latter counts every model evaluation, not just successive steps in the optimization
algorithm. Hence, if $n$ variables are being optimized, the optimizer performs $n+1$ samples (in
\texttt{FiniteDifference}) per iteration. If for example we have 4 variables, and the \xmlNode{limit} is set to 100,
then at most 20 optimization iterations can be performed, since there are 5 model evaluations per optimization
iteration. If we have 4 variables and instead the \xmlNode{iterationLimit} is set to 100, then at most 100
optimization iterations can be performed, which results in 500 model evaluations. The optimizer will terminate on
the first limit it observes. \default{None}.
Is this default correct, or is there a default iteration limit?
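A quick check of the numbers in the suggested wording (again plain Python, not RAVEN code; it assumes FiniteDifference uses n+1 model evaluations per iteration, as the suggested text states):

nVars = 4
evalsPerIteration = nVars + 1                 # new opt point plus one perturbation per variable

limit = 100                                   # sampler <limit>: caps total model evaluations
print(limit // evalsPerIteration)             # 20 -> at most 20 optimization iterations

iterationLimit = 100                          # optimizer <iterationLimit>: caps iterations
print(iterationLimit * evalsPerIteration)     # 500 -> up to 500 model evaluations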
@@ -619,6 +626,7 @@ def _checkAcceptability(self, traj, opt, optVal, info):
      # if this is an opt point rerun, accept it without checking.
      elif self._acceptRerun[traj]:
        acceptable = 'rerun'
        self.incrementIteration
Does this line do anything? Did you mean self.incrementIteration()?
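For reference, a minimal illustration (generic Python, unrelated to the RAVEN classes) of why a bare method reference is a no-op:

class Counter:
  def __init__(self):
    self.n = 0
  def incrementIteration(self):
    self.n += 1

c = Counter()
c.incrementIteration     # evaluates to a bound-method object and discards it; no call happens
print(c.n)               # prints 0
c.incrementIteration()   # the parentheses actually invoke the method
print(c.n)               # prints 1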
      conv=str(converged),
      got=int(self.getIteration(traj)),
      req=int(self._convergenceCriteria['iterationLimit'])))
    return converged
Newline after return, please.
@@ -231,7 +231,7 @@ def localGenerateInput(self, model, inp):
    self.inputInfo['batchMode'] = False
    for _ in range(self.batch):
      inputInfo = {'SampledVarsPb':{}, 'batchMode':self.inputInfo['batchMode']} # ,'prefix': str(self.batchId)+'_'+str(i)
      if self.counter == self.limit + 1:
      if self.getIteration(all(self._activeTraj)) == self.limit + 1:
I'm not familiar with what all does here. Doesn't that do a boolean check on each entry of self._activeTraj? Did you want:
if all(self.getIteration(self._activeTraj) == self.limit + 1)
perhaps?
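If the intent is to check that every active trajectory has reached the limit, one possible form inside localGenerateInput (a sketch only, assuming getIteration takes a single trajectory identifier and self._activeTraj is an iterable of identifiers) would be a generator expression:

# Sketch: assumes getIteration(traj) returns the iteration count of one trajectory
# and self._activeTraj holds the identifiers of all active trajectories.
if all(self.getIteration(traj) == self.limit + 1 for traj in self._activeTraj):
  ...  # handle the limit-reached case as in the original branch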
@@ -560,6 +560,8 @@ def _resolveNewOptPoint(self, traj, rlz, optVal, info):
      # nothing else to do but wait for the grad points to be collected
    elif acceptable == 'rejected':
      self._rejectOptPoint(traj, info, old)
      if self.printTag == 'GradientDescent':
        self.__stepCounter[traj] -= 1
this concerns me; are we not counting rejected steps as iterations now? This seems like it will create indexing problems and logistical mismatches. Help me understand why I'm wrong about that.
3.51645507813,0.5,0.420926728435,0.127156575521,9.0,rerun,0.0
2.753515625,0.5,0.0958782696724,0.0847710503472,10.0,accepted,0.0
3.70718994141,0.5,0.789248108373,0.105963812934,11.0,rejected,0.0
2.753515625,0.5,0.0958782696724,0.105963812934,11.0,rerun,0.0
I see the duplicate step numbers here; I'm struggling to resolve that, unless we want to consider a rejection and its subsequent rerun as part of the same step, in which case the model run count implied by the user input is only accurate when there are no rejected points.
I provided a few comments, but not a full review.
Pull Request Description
What issue does this change request address? (Use "#" before the issue to link it, i.e., #42.)
Closes #1750
What are the significant changes in functionality due to this change request?
For Change Control Board: Change Request Review
The following review must be completed by an authorized member of the Change Control Board.
If new development on the parallel system is performed, tests must set the node <internalParallel> to True.
If any test used as a basis for documentation examples (currently found in raven/tests/framework/user_guide and raven/docs/workshop) has been changed, the associated documentation must be reviewed and assured the text matches the example.